Guest post on Watts Up With That by Jerome Ravetz of Oxford University, UK
At the end of January 2010 two distinguished scientific institutions shared headlines with Tony Blair over accusations of the dishonest and possibly illegal manipulation of information. Our ‘Himalayan glaciers melting by 2035’ of the Intergovernmental Panel on Climate Change is matched by his ‘dodgy dossier’ of Saddam’s fictitious subversions. We had the violations of the Freedom of Information Act at the University of East Anglia; he has the extraordinary 70-year gag rule on the David Kelly suicide file. There was ‘the debate is over’ on one side, and ‘WMD beyond doubt’ on the other. The parallels are significant and troubling, for on both sides they involve a betrayal of public trust.
Politics will doubtless survive, for it is not a fiduciary institution; but for science the dangers are real. Climategate is particularly significant because it cannot be blamed on the well-known malign influences from outside science, be they greedy corporations or an unscrupulous State. This scandal, and the resulting crisis, was created by people within science who can be presumed to have been acting with the best of intentions. In the event of a serious discrediting of the global-warming claims, public outrage would therefore be directed at the community of science itself, and (from within that community) at its leaders who were either ignorant or complicit until the scandal was blown open. If we are to understand Climategate, and move towards a restoration of trust, we should consider the structural features of the situation that fostered and nurtured the damaging practices. I believe that the ideas of Post-Normal Science (as developed by Silvio Funtowicz and myself) can help our understanding.
There are deep problems in the management of uncertainty in science in the policy domain that will not be resolved by more elaborate quantification. In the gap between science and policy, the languages, their conventions and their implications are effectively incommensurable. It takes determination and skill for a scientist who is committed to social responsibility to avoid becoming a ‘stealth advocate’ (in the terms of Roger Pielke Jr.). When the policy domain seems unwilling or unable to recognise plain and urgent truths about a problem, the contradictions between scientific probity and campaigning zeal become acute. This is a perennial problem for all policy-relevant science, and it seems to have arisen on a significant scale in climate science. The management of uncertainty and quality in such increasingly common situations is now an urgent task for the governance of science.
We can begin to see what went seriously wrong when we examine what the leading practitioners of this ‘evangelical science’ of global warming (thanks to Angela Wilkinson) took to be the plain and urgent truth in their case. This was not merely that there are signs of exceptional disturbance in the ecosphere due to human influence, nor even that the climate might well be changing more rapidly now than for a very long time. Rather, they propounded, as a proven fact, Anthropogenic Carbon-based Global Warming. There is little room for uncertainty in this thesis; it effectively requires hockey-stick behaviour in all indicators of global temperature, so that the recent rise can be attributed entirely to industrialisation. Its iconic image is the steadily rising graph of CO2 concentrations over the past fifty years at the Mauna Loa volcano in Hawaii (with the implicit assumption that CO2 had always previously been at or below that starting level). Since CO2 has long been known to be a greenhouse gas, with scientific theories quantifying its effects, the scientific case for this dangerous trend could seem to be overwhelmingly simple, direct, and conclusive.
In retrospect, we can ask why this particular, really rather extreme view of the prospect became the official one. It seems that several causes conspired. First, the early opposition to any claim of climate change was only partly scientific; the tactics of the opposing special interests were such as to induce the proponents to adopt a simple, forcefully argued position. Then, once the position was adopted, its proponents became invested in it, and attached to it, in all sorts of ways, institutional and personal. And I suspect that a simplified, even simplistic, claim was more comfortable for these scientists than one where complexity and uncertainty were acknowledged. It is not merely a case of the politicians and public needing a simple, unequivocal message. As Thomas Kuhn described ‘normal science’, which (as he said) nearly all scientists do all the time, it is puzzle-solving within an unquestioned framework or ‘paradigm’. Issues of uncertainty and quality are not prominent in ‘normal’ scientific training, and so they are less easily conceived and managed by its practitioners.
Now, as Kuhn saw, this ‘normal’ science has been enormously successful in enabling our unprecedented understanding and control of the world around us. But his analysis related to the sciences of the laboratory, and by extension the technologies that could reproduce stable and controllable external conditions for their working. Where the systems under study are complicated, complex or poorly understood, that ‘textbook’ style of investigation becomes less, sometimes much less, effective. The near-meltdown of the world’s financial system can be blamed partly on naïvely reductionist economics and misapplied simplistic statistics. The temptation among ‘normal’ scientists is to work as if their material is as simple as in the lab. If nothing else, that is the path to a steady stream of publications, on which a scientific career now so critically depends. The most obvious effect of this style is the proliferation of computer simulations, which give the appearance of solved puzzles even when neither data nor theory provide much support for the precision of their numerical outputs. Under such circumstances, a refined appreciation of uncertainty in results is inhibited, and even awareness of quality of workmanship can be atrophied.
In the course of the development of climate-change science, all sorts of loose ends were left unresolved and sometimes unattended. Even the most fundamental quantitative parameter of all, the forcing factor relating the increase in mean temperature to a doubling of CO2, lies somewhere between 1 and 3 degrees, and is thus uncertain to within a factor of 3. The precision (at about 2%) in the statements of the ‘safe limits’ of CO2 concentration, which depend on calculations with this factor, is not easily justified. Also, the predictive power of the global temperature models has been shown to depend more on the ‘story line’ than on anything else, the end-of-century increase in temperature ranging variously from a modest one degree to a catastrophic six. And the ‘hockey stick’ picture of the past, so crucial for the strict version of the climate-change story, has run into increasingly severe problems. As an example, it relied totally on a small set of deeply uncertain tree-ring data for the Medieval period, to refute the historical evidence of a warming then; but it needed to discard that sort of data for recent decades, as they showed a sudden cooling from the 1960s onwards! In the publication, the recent data from other sources were skilfully blended in so that the change was not obvious; that was the notorious ‘Nature trick’ of the CRU e-mails.
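To make that arithmetic concrete, here is a worked illustration (my own back-of-envelope sketch using the standard logarithmic approximation for CO2 forcing; the baseline concentration and warming target are chosen purely for illustration):

```latex
% Warming at concentration C, for a sensitivity S per doubling of CO2:
\Delta T(C) = S \cdot \frac{\ln(C/C_0)}{\ln 2}, \qquad S \in [1, 3]\;{}^{\circ}\mathrm{C}

% Inverting for the concentration that holds warming to a target \Delta T^{*}:
C_{\mathrm{safe}} = C_0 \cdot 2^{\Delta T^{*}/S}

% With C_0 = 280 ppm and \Delta T^{*} = 2 degrees:
%   S = 3:  C_safe = 280 * 2^{2/3} ~  444 ppm
%   S = 1:  C_safe = 280 * 2^{2}   = 1120 ppm
```

A factor-of-3 spread in the sensitivity thus maps to roughly a 2.5-fold spread in the implied ‘safe limit’, which sits awkwardly beside statements of that limit quoted to 2% precision.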
Even worse, for the warming case to have political effect, a mere global average rise in temperature was not compelling enough. So that people could appreciate the dangers, there needed to be predictions of future climate - or even weather - in the various regions of the world. Given the gross uncertainties in even the aggregated models, regional forecasts are really beyond the limits of science. And yet they have been provided, with various degrees of precision. Those announced by the IPCC have become the most explosive.
As all these anomalies and unsolved puzzles emerged, the neat, compelling picture became troubled and even confused. In Kuhn’s analysis, this would be the start of a ‘pre-revolutionary’ phase of normal science. But the political cause had been taken up by powerful advocates, like Al Gore. We found ourselves in another crusading ‘War’, like those on (non-alcoholic) Drugs and ‘Terror’. This new War, on Carbon, was equally simplistic, and equally prone to corruption and failure. Global warming science became the core element of this major worldwide campaign to save the planet. Any weakening of the scientific case would have amounted to a betrayal of the good cause, as well as a disruption of the growing research effort. All critics, even those who were full members of the scientific peer community, had to be derided and dismissed. As we learned from the CRU e-mails, they were not considered to be entitled to the normal courtesies of scientific sharing and debate. Requests for information were stalled, and as one witty blogger has put it, ‘peer review’ was replaced by ‘pal review’.
Even now, the catalogue of unscientific practices revealed in the mainstream media is very small in comparison to what is available on the blogosphere. Details of shoddy science and dirty tricks abound. By the end, the committed inner core were confessing to each other that global temperatures were falling, but it was far too late to change course. The final stage of corruption, cover-up, had taken hold. For the core scientists and the leaders of the scientific communities, as well as for nearly all the liberal media, ‘the debate was over’. Denying Climate Change received the same stigma as denying the Holocaust. Even the trenchant criticisms of the most egregious errors in the IPCC reports were kept ‘confidential’. And then came the e-mails.
We can understand the root cause of Climategate as a case of scientists constrained to attempt to do normal science in a post-normal situation. But climate change had never been a really ‘normal’ science, because the policy implications were always present and strong, even overwhelming. Indeed, if we look at the definition of ‘post-normal science’, we see how well it fits: facts uncertain, values in dispute, stakes high, and decisions urgent. In needing to treat Planet Earth like a textbook exercise, the climate scientists were forced to break the rules of scientific etiquette and ethics, and to play scientific power-politics in a way that inevitably became corrupt. The combination of non-critical ‘normal science’ with anti-critical ‘evangelical science’ was lethal. As in other ‘gate’ scandals, one incident served to pull a thread on a tissue of protective plausibilities and concealments, and eventually led to an unravelling. What was in the e-mails could be largely explained in terms of embattled scientists fighting off malicious interference; but the materials ready and waiting on the blogosphere provided a background, and that is what converted a very minor scandal into a catastrophe.
Consideration of those protective plausibilities can help to explain how the illusions could persist for so long until their sudden collapse. The scientists were all reputable, they published in leading peer-reviewed journals, and their case was itself highly plausible and worthy in a general way. Individual criticisms were, for the public and perhaps even for the broader scientific community, kept isolated and hence muffled and lacking in systematic significance. And who could have imagined that at its core so much of the science was unsound? The plausibility of the whole exercise was, as it were, bootstrapped. I myself was alerted to weaknesses in the case by some caveats in Sir David King’s book The Hot Topic; and I had heard of the hockey-stick affair. But even I was carried along by the bootstrapped plausibility, until the scandal broke. (I have benefited from the joint project on plausibility in science of colleagues at Oxford and at Arizona State University.)
Part of the historic significance of Climategate is that the scandal was so effectively and quickly exposed. Within a mere two months of the first reports in the mainstream media, the key East Anglia scientists and the Intergovernmental Panel on Climate Change were discredited. Even if only a fraction of their scientific claims were eventually refuted, their credibility as trustworthy scientists was lost. To explain how it all happened so quickly and decisively, we have the confluence of two developments, one social and the other technical. For the former, there is a lesson of Post-Normal Science, that we call the Extended Peer Community. In traditional ‘normal’ science, the peer community, performing the functions of quality-assurance and governance, is strictly confined to the researchers who share the paradigm. In the case of ‘professional consultancy’, the clients and/or sponsors also participate in governance. We have argued that in the case of Post-Normal Science, the ‘extended peer community’, including all affected by the policy being implemented, must be fully involved. Its particular contribution will depend on the nature of the core scientific problem, and also on the phase of investigation. Detailed technical work is a task for experts, but quality-control on even that work can be done by those with much broader expertise. And on issues like the definition of the problem itself, the selection of personnel, and crucially the ownership of the results, the extended peer community has full rights of participation. This principle is effectively acknowledged in many jurisdictions, and for many policy-related problems. The theory of Post-Normal Science goes beyond the official consensus in recognising ‘extended facts’, which might include local knowledge and values, as well as unofficially obtained information.
The task of creating and involving the extended peer community, generally known as ‘participation’, has been recognised as difficult, with its own contradictions and pitfalls. It has grown haphazardly, with isolated successes and failures. Hitherto, critics of scientific matters have been relegated to a sort of samizdat world, exchanging private letters or writing books that can easily be ignored (as not being peer-reviewed) by the ruling establishment. This has generally been the fate of even the most distinguished and responsible climate-change critics, up to now. A well-known expert in uncertainty management, Jeroen van der Sluijs, explicitly condemned the ‘overselling of certainty’ and predicted the impending destruction of trust; but he received no more attention than did Nassim Nicholas Taleb in warning of the ‘fat tails’ in the probability distributions of securities that led to the Credit Crunch. A prominent climate scientist, Mike Hulme, provided a profound analysis in Why We Disagree About Climate Change, in terms of complexity and uncertainty. But since legitimate disagreement was deemed nonexistent, he too was ignored.
To have a political effect, the ‘extended peers’ of science have traditionally needed to operate largely by means of activist pressure-groups using the media to create public alarm. In this case, since the global warmers had captured the moral high ground, criticism has remained scattered and ineffective, except on the blogosphere. The position of Green activists is especially difficult, even tragic; they have been ‘extended peers’ who were co-opted into the ruling paradigm, which in retrospect can be seen as a decoy or diversion from the real, complex issues of sustainability, as shown by Mike Hulme. Now they must do some very serious re-thinking about their position and their role.
The importance of the new media of communications in mass politics, as in the various ‘rainbow revolutions’, is well attested. To understand how the power-politics of science have changed in the case of Climategate, we can take a story from the book Here Comes Everybody by Clay Shirky. There were two incidents in the Boston (U.S.A.) diocese of the Roman Catholic Church, involving the shuffling of paedophile priests around parishes. The first time, there was a criminal prosecution, with full exposure in the press, and then nothing happened. The second time, the outraged parents got on their cell phones and organised; and eventually Cardinal Archbishop Bernard Francis Law (who had started as a courageous cleric in the 1960s) had to leave for Rome in disgrace. The Climategate affair shows the importance of the new IT for science, as an empowerment of the extended peer community.
The well-known principle ‘knowledge is power’ has its obverse, ‘ignorance is impotence’. And ignorance is maintained, or eventually overcome, by a variety of socio-technical means. With the invention of cheap printing on paper, the Bible could be widely read, and heretics became Reformers. The social activity of science as we know it expanded and grew through the age of printing. But knowledge was never entirely free, and the power-politics of scientific legitimacy remained quite stable for centuries. The practice of science has generally been restricted to a social elite and its occasional recruits, as it requires a prior academic education and a sufficiency of leisure and of material resources. With the new information technology, all that is changing rapidly. As we see from the ‘open source’ movement, many people play an active role in enjoyable technological development in the spare time that their job allows or even encourages. Moreover, all over IT there are blogs that exercise quality control on the industry’s productions. In this new knowledge industry, the workers can be as competent as the technicians and bosses. The new technologies of information enable the diffusion of scientific competence and the sharing of unofficial information, and hence give power to peer communities that are extended far beyond the Ph.D.s in the relevant subject-specialty. The most trenchant and effective critics of the ‘hockey stick’ statistics were a university-employed economist and a computer expert.
Like any other technology, IT is many-faceted. It is easily misused and abused, and much of the content of the blogosphere is trivial or worse. The right-wing political agendas of some climate sceptics, their bloggers and their backers, are quite well known. But to use their background or motivation as an excuse for ignoring their arguments is a betrayal of science. The blogosphere interacts with other media of communication, in the public and scientific domains. Some parts are quite mainstream, others not. The Climategate blogosphere is as varied in quality as any other. Some leading scholars, like Roger Pielke Jr., have had personal blogs for a long time. Some blogs are carefully monitored, have a large readership and are sampled by the mainstream media (such as the one on which this is posted, Wattsupwiththat.com). Others are less rigorous; but the same variation in quality can be found in the nominally peer-reviewed scientific literature. Keeping up with the blogosphere requires different skills from keeping up with the traditional literature; it is most useful to find a summarising blog that fits one’s special interests, as well as a loyal correspondent, as (in my case) Roger ‘tallbloke’ Tattersall.
Some mainstream publications are now saying nice things about the blogosphere. Had such sentiments been expressed a while ago, the critical voices might have had a public hearing and the Climategate scandal might have been exposed before it became entrenched so disastrously. And now the critical blogosphere does not need to be patronised. Like any extension of political power, whether it be the right to believe, to protest, to vote, to form trades unions, or to be educated, it can lead to instabilities and abuses. But now the extended peer community has a technological base, and the power-politics of science will be different. I cannot predict how it will work out, but we can be confident that corruptions built on bootstrapped plausibility will be less likely in the future.
There is an important philosophical dimension to Climategate, a question of the relation of personal scientific ethics to objective scientific facts. The problem is created by the traditional image of science (as transmitted in scientific education) as ‘value-free’. The personal commitments to integrity, which are necessary for the maintenance of scientific quality, receive no mention in the dominant philosophy of science. Kuhn’s disenchanted picture of science was so troubling to the idealists (such as Popper) because in his ‘normal’ science criticism had hardly any role. For Kuhn, even the Mertonian principles of ethical behaviour were effectively dismissed as irrelevant. Was this situation truly ‘normal’ - meaning either average or (worse) appropriate? The examples of shoddy science exposed by Climategate convey a troubling impression. From the record, it appears that in this case, criticism and a sense of probity needed to be injected into the system by the extended peer community from the (mainly) external blogosphere.
The total assurance of the mainstream scientists in their own correctness, and in the intellectual and moral defects of their critics, is now in retrospect perceived as arrogance. For their spokespersons to continue to make light of the damage to the scientific case, and to ignore the ethical dimension of Climategate, is to risk public outrage at a perceived unreformed arrogance. If there is a continuing stream of ever more detailed revelations, originating in the blogosphere but now being brought to a broader public, then the credibility of the established scientific authorities will continue to erode. Do we face the prospect of the IPCC reports being totally dismissed as just more dodgy dossiers, and of hitherto trusted scientists being accused of negligence or worse? There will be those who, with their own motives, will be promoting such a picture. How can it be refuted?
And what about the issue itself? Are we really experiencing Anthropogenic Carbon-based Global Warming? If the public loses faith in that claim, then the situation of science in our society will be altered for the worse. There is very unlikely to be a crucial experience that either confirms or refutes the claim; the post-normal situation is just too complex. The consensus is likely to depend on how much trust can still be put in science. The whole vast edifice of policy commitments for Carbon reduction, with their many policy prescriptions and quite totalitarian moral exhortations, will be at risk of public rejection. What sort of chaos would then result? The consequences for science in our civilisation would be extraordinary.
To the extent that the improved management of uncertainty and ignorance can remedy the situation, some useful tools are at hand. In the Netherlands, scholars and scientists have developed ‘Knowledge Quality Assessment’ methodologies for characterising uncertainty in ways that convey the richness of the phenomenon while still performing well as robust tools of analysis and communication. Elsewhere, scholars are exploring methods for managing disagreement among scientists, so that such post-normal issues do not need to become so disastrously polarised. A distinguished scholar, Sheila Jasanoff, has called for a culture of humility among scientists, itself a radical move towards a vision of a non-violent science. Scientists who have been forced to work on the blogosphere have had the invaluable experience of exclusion and oppression; that could make it easier for them to accept that something is seriously wrong and then to engage in the challenging moral adventures of dealing with uncertainty and ignorance. The new technologies of communications are revolutionising knowledge and power in many areas. The extended peer community of science on the blogosphere will be playing its part in that process. Let dialogue commence!
Unquestionably the world’s final authority on the subject, the Intergovernmental Panel on Climate Change has seen its findings and recommendations form the bedrock of virtually every climate-related initiative worldwide for more than a decade. Likewise, virtually all such future endeavors (be they Kyoto II, domestic cap-and-tax, or EPA carbon regulation) would inexorably be built upon the credibility of the same U.N. panel’s “expert” counsel. But a glut of recent and ongoing discoveries of systemic fraud has rocked that foundation, and the entire man-made global warming house of cards is now teetering on the verge of complete collapse.
Simply stated, we’ve been swindled. We’ve been set up as marks by a gang of opportunistic hucksters who have exploited the naïvely altruistic intentions of the environmental movement in an effort to control international energy consumption while redistributing global wealth and (in many cases) greedily lining their own pockets in the process.
Perhaps now, more people will finally understand what many have known for years: Man-made climate change was never really a problem—but rather, a solution.
For just as the science of the IPCC has been exposed as fraudulent, so have its apparent motives. The true ones became strikingly evident when the negotiating text for the “last chance to save the planet” International Climate Accord [PDF], put forth in Copenhagen in December, was found to contain as many paragraphs outlining the payment of “climate debt” reparations by Western nations under the watchful eye of a U.N.-controlled global government as it did emission reduction schemes.
Icecap Note: In the story here, Marc then goes into great detail on the history of the global warming movement and the increasing level of fraud. We pick up now in November 2009.
Caught with their Green Thumbs on the Scale
Most readers are likely aware that in November of last year, a folder containing documents, source code, data, and e-mails was somehow misappropriated from the University of East Anglia’s Climate Research Unit (CRU). The so-called “Climategate” emails disclosed an arrogant mockery of the peer review process, as well as widespread complicity and acceptance among climate researchers in hiding and manipulating data unfriendly to the global warming agenda. The modeling source code—as I reported here—contained routines which employed a number of “fudge factors” to modify the results of data series—again, to bias results to the desired outcome. And this, coupled with the disclosure of the Jones “hide the decline” e-mail, provided more evidence that MBH98—and ergo unprecedented 20th-century warming—is a fraud.
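For readers unfamiliar with the mechanics, here is a purely hypothetical Python sketch of what an additive “fudge factor” of the kind described looks like. This is not the actual CRU code; every name and number below is invented for illustration:

```python
import numpy as np

# Hypothetical illustration only: a hand-picked adjustment array is
# interpolated across a data series and added to it, shifting the
# later values upward regardless of what the raw data show.

years = np.arange(1900, 2000)
rng = np.random.default_rng(0)
proxy = rng.normal(0.0, 0.2, years.size)  # stand-in for a raw data series

# Hand-chosen adjustments at a few anchor years (invented values)
anchor_years = np.array([1900, 1940, 1960, 1980, 1999])
anchor_adjust = np.array([0.0, 0.0, 0.1, 0.25, 0.4])

# Interpolate the adjustment to every year, then apply it
adjustment = np.interp(years, anchor_years, anchor_adjust)
adjusted = proxy + adjustment

print(f"mean shift applied over the final decade: {adjustment[-10:].mean():.2f}")
```

Whether any particular adjustment of this general shape is legitimate calibration or illegitimate tuning is precisely the question the released code raised.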
The following month, the Moscow-based Institute of Economic Analysis (IEA) issued a report claiming that the Hadley Center for Climate Change had probably tampered with Russian climate data. Apparently, Hadley ignored data submitted by 75% of Russian stations, effectively omitting over 40% of Russian territory from global temperature calculations—not coincidentally, areas that didn’t “show any substantial warming in the late 20th-century and the early 21st-century.”
But Climategate was only the tip of the iceberg. An AR4 warning that unchecked climate change will melt most of the Himalayan glaciers by 2035 was found to be lifted from an erroneous World Wildlife Fund (WWF) report and misrepresented as peer-reviewed science. IPCC Chairman Rajendra Pachauri attempted to parry this “mistake” by accusing the accusers at the Indian environment ministry of “arrogance” and of practicing “voodoo science” in issuing a report [PDF] disputing the IPCC. But one from his own ranks, Dr. Murari Lal, the coordinating lead author of the chapter making the claim, had the astoundingly bad manners to admit that he knew all along that it “did not rest on peer-reviewed scientific research.” Apparently, so had Pachauri, who continued to lie about it for months so as not to sully the exalted AR4 immediately prior to Copenhagen.
And “Glaciergate” opened the floodgates to other serious misrepresentations in AR4, including a boatload of additional non-peer-reviewed projections pulled directly from WWF reports. These included discussions on the effects of melting glaciers on mudflows and avalanches, the significant damages climate change will have on selected marine fish and shellfish, and even assessing global-average per-capita “ecological footprints.” It should be noted here that IPCC rules specifically disqualify all non-peer-reviewed primary sources.
Nonetheless, Chapter 13 of the WG2 report stated that forty percent of Amazonian forests are threatened by climate change. And it also cited a WWF piece as its source—this one by two so-called “experts,” who incidentally are actually environmental activists. What’s more, the WWF study dealt with anthropogenic forest fires, not global warming, and barely made mention of Amazonian forests at all. Additionally, the WWF’s figures were themselves based on a Nature paper [PDF] studying neither global warming nor forest fires, but rather the effects of logging on rain forests. So the IPCC predicted climate change-caused 40% forest destruction based on a report two steps upstream which concluded that “[l]ogging companies in Amazonia kill or damage 10-40% of the living biomass of forests through the harvest process.”
Adding to the glacial egg on the AR4 authors’ faces was the statement that observed reductions in mountain ice in the Andes, Alps, and Africa were being caused by global warming. It turns out that one of the two source papers cited was actually a mountain-climbers’ magazine. Actually, this is a relatively authoritative source compared to the other: a dissertation from a Swiss college student based on his interviews with mountain guides in the Alps.
The 2007 green bible also contained a gross exaggeration in its citation of the Muir-Wood et al. (2006) study on global warming and natural disasters. The original stated that “a small statistically significant trend was found for an increase in annual catastrophe loss since 1970 of 2% per year.” But the AR4 synthesis report went further, stating that more “heavy precipitation” is “very likely” and that an “increase in tropical cyclone intensity” is “likely” as temperatures rise.
Perhaps the most dumbfounding AR4 citation (so far) was recently discovered by Climatequotes.com. It appears that a WG2 warning that “[t]he multiple stresses of climate change and increasing human activity on the Antarctic Peninsula represent a clear vulnerability and have necessitated the implementation of stringent clothing decontamination guidelines for tourist landings on the Antarctic Peninsula” originated from and was attributed to a guide for Antarctica tour operators on decontaminating boots and clothing. Really.
And here’s one you may not have heard yet. A paper published last December by Lockart, Kavetski, and Franks rebuts the AR4 WG1 assertion that CO2-driven higher temperatures drive higher evaporation and thereby cause droughts. The study claims they got it backwards, as higher air temperatures are in fact driven by the lack of evaporation (as occurs during drought). I smell another “-gate” in the works.
And yet, perhaps the greatest undermining of IPCC integrity comes from a recent study, which I’ve summarized here, challenging the global temperature data reported by its two most important American allies: NASA and the National Oceanic and Atmospheric Administration (NOAA). As these represent the readings used by most climate analysis agencies, including the IPCC, the discovery by meteorologist Joe D’Aleo and computer expert E.M. Smith that the data have been intentionally biased to the warm side since 1990 puts literally every temperature-related climate report released since then into question.
...Along with, of course, any policy decisions based on their content.
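To see the arithmetic behind one version of that warm-bias claim, here is a minimal hypothetical sketch (all numbers invented; this is not the D’Aleo/Smith analysis itself): if cold stations drop out of a network and the headline figure is a raw mean over whatever stations remain, the mean rises even though no individual station warmed.

```python
# Hypothetical illustration (invented numbers) of the station-dropout
# effect: removing cold stations from a network raises a raw mean
# even when every individual station's record is unchanged.

warm_stations = [15.2, 14.8, 16.1]   # mean annual temps, deg C
cold_stations = [-8.3, -6.9, -10.4]  # e.g. high-latitude stations

full_network = warm_stations + cold_stations
reduced_network = warm_stations      # cold stations dropped

mean_before = sum(full_network) / len(full_network)
mean_after = sum(reduced_network) / len(reduced_network)

print(f"raw mean, full network:       {mean_before:+.2f} C")
print(f"raw mean, cold stations gone: {mean_after:+.2f} C")

# Anomaly-based methods are designed to avoid exactly this artifact,
# which is why the dispute turns on how the remaining records were
# actually processed, not merely on how many stations were dropped.
```

The point of the sketch is only that station selection can move a naive average; whether the NASA/NOAA processing actually suffers from this is exactly what the cited study disputes.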
It’s Time for some Real Climate Justice
Here in the States, left-leaning policymakers and their cohorts in the MSM have thus far all but ignored both the reality and implications of the fraud unveiled by Climategate, Glaciergate, Amazongate, and the myriad other AGW-hyping scandals that seem to surface almost daily. Remarkably, most continue to discuss “climate pollution” and “carbon footprints” and the “tragedy” of Copenhagen’s failure, even as the global warming fever of their own contagion plunges precipitously. The president appears equally deluded, as passing a “comprehensive energy and climate bill” (as though the climate might somehow be managed by parliamentary edict) was one of the many goals he set forth in his State of the Union address last week.
But their denial will be short-lived as even the last vestiges of the green lie they so desperately cling to evaporate under the heat of the spotlight suddenly shining upon them.
For outside of the U.S., many news organizations and politicians already get it. Some are calling for Pachauri’s resignation, and others for a full investigation into his possible financial conflicts of interest. There have also been demands for a complete reassessment of all IPCC reports, including a suggestion from the Financial Times that, given the IPCC’s “central role in climate science,” an independent auditor must be commissioned to “look at all the claims in the 2007 report and remove any that were not soundly based.”
At least one American, AGW believer Walter Russell Mead of American Interest Online, agrees: “A highly publicized effort that includes serious skeptics and has bipartisan backing is the only way to get American public opinion on board the climate change train.” And China’s lead climate change negotiator, Xie Zhenhua, suggested that “contrarian views” be included in 2014’s AR5.
But when The Australian suddenly recommended “applying a healthy degree of scepticism to scientific claims that drive policy,” paleoclimatologist Bob Carter told me he couldn’t help but laugh as he wrote to the editors, welcoming them to the ranks of the majority of scientists who “practice exactly the technique that [they] belatedly recommend”—the skeptics.
Indeed, this abrupt challenge to their own “consensus” mantra that they’ve spoon-fed the public for years rings decidedly hollow. Those “serious skeptics” and the holders of those “contrarian views” are the same scientists the IPCC deliberately excluded from its proceedings with impunity. They’re the same people whom the media have ignored or ridiculed for years, along with their conventions—like Heartland’s ICCC 1, 2, and 3—and innumerable contrarian reports. In fact, a superb rebuttal to AR4, Climate Change Reconsidered: The 2009 Report of the Nongovernmental International Panel on Climate Change (NIPCC)—produced by Dr. S. Fred Singer, Dr. Craig Idso, and thirty fellow scientists—has received no MSM attention whatsoever, despite its availability here since last June.
Besides, the time for credibility makeovers has long passed. As U.K. Professor Philip Stott recently observed:
[A]s ever, capitalism has read the runes, with carbon-trading posts quietly being shed, ‘Green’ jobs sidelined, and even big insurance companies starting to hedge their own bets against the future of the Global Warming Grand Narrative. These rats are leaving the sinking ship far faster than any politician, many of whom are going to be abandoned, left, still clinging to the masts, as the Good Ship ‘Global Warming’ founders on titanic icebergs in the raging oceans of doubt and delusion.
Stott compared the IPCC’s fall to that of the Berlin Wall. And he’s spot-on—for just as the latter symbolized the doom of European communism, so does the former signal the death knell for global socialist-environmentalism.
Let’s get real—given the enormousness of the booty these grifters attempted to extort from the entire developed world, not to mention the extraordinary depth of their hubris, it isn’t rehabilitation that’s required here, but swift justice. In 2006, David Roberts, a staff writer at IPCC cheerleader Grist Magazine, received a pass when he called for Nuremberg-style war-crimes trials for the “bastards” who were members of the global warming “denial industry.” Surely, it’s now clear that the members of the global warming “fraud industry” are the true “bastards” who should be hauled before an international tribunal for crimes against humanity...any tribunal, that is, other than the U.N.’s own International Criminal Court in The Hague.
We’ll deal with their accessories-after-the-fact in the Congress, the White House—and consequently, the EPA—in due time.
And the first such judgment is already scheduled—for November.
Scotland has suffered some of the coldest winter months in almost 100 years, the Met Office has confirmed.
Combining the December and January temperatures showed they were the coldest since 1914 - the year data started being logged.
Elsewhere, it was the coldest December and January in Northern Ireland since 1962/63 and the coldest in England and Wales since 1981/82.
Sub-zero temperatures and snow blew into the UK from mid-December. The average minimum overnight temperature for January is usually at freezing point, but in Scotland it was regularly below -5C.
Salt grit shortages were reported throughout December and January, and the tiny Highlands village of Altnaharra recorded one January night temperature of -22.3C.
Although Scotland has had the coldest combined December and January on record, temperatures would need to go lower to beat the coldest two-month winter combination: January and February of 1963 remain the coldest on record north of the border.
MEANWHILE... Cold Snap Causes Deaths in Eastern Europe, Germany. Spiegel Online story here
Cold weather in Germany and Eastern Europe in recent days has caused deaths and major disruption to transportation systems. Parts of Europe have been snow-covered for a month, but that coating turned into a layer of ice in many countries in recent days.
A continuing cold snap across parts of Europe over the weekend and into Monday caused the deaths of more than 40 people in Romania, Bulgaria and Poland. It’s a cold spell that also stretched across much of Germany, leaving people here shivering as temperatures plunged as low as -15 degrees Celsius (5 degrees Fahrenheit) early on Monday morning.
The frigid temperatures transformed images of a snowy winter wonderland into an icy landscape in Germany. Cold weather caused lakes, rivers and streams to freeze over and ice to collect on roadways, rail lines and airport runways, disrupting transportation and causing accidents. Several dozen injuries and at least two deaths occurred as a result of road accidents on Monday. Traffic jams were also reported across the country, and at Frankfurt airport, Germany’s largest, over 150 flights had to be cancelled because of runway conditions. Numerous other flights were delayed.
Deadly Cold Strikes Eastern Europe
Germany’s cold spell, however, has been minor compared to temperatures being experienced in Eastern Europe. A government spokesperson in Bucharest reported that ice cold temperatures of -34 degrees Celsius caused the deaths of 11 people in Romania in just 24 hours, with a total of 22 deaths registered in the last five days as a result of the cold.
The country’s social services have reported that 15,000 people in Romania are homeless, with 5,000 living in the capital city of Bucharest alone. Authorities have ordered hospitals to take in as many of them as possible so they can escape the cold.
“Most of the dead are older homeless people,” said Raed Arafat, a senior official in the Romanian Health Ministry. “We try to find endangered people and bring them to safety, but in some cases we miss people,” he said, according to French news agency AFP.
Meanwhile, in Poland 11 people died during what has so far been the country’s coldest night this winter, with temperatures as low as -31 degrees Celsius. In total, 16 people died over the weekend from freezing temperatures, the government in Warsaw reported.
With temperatures as low as -29 degrees Celsius, Bulgaria reported the lowest temperatures for the region in the past 50 years. Three deaths linked to the cold weather have been reported since Saturday. Meteorologists also warn that waters in the bay of the Black Sea could freeze for the first time since the winter of 1942-1943.
--------------------------
Climategate Necessary to Cover Incorrect Climate Basics of IPCC By Dr. Tim Ball, Canada Free Press
Canada and the US announced new targets for carbon reduction that are completely unnecessary. It is madness and ultimately destructive to western society, but it is what the perpetrators want. Despite exposure of the complete corruption of the science, they continue to assume CO2 is a problem. UN Climate chief Yvo de Boer said, “what’s happened, it’s unfortunate, it’s bad, it’s wrong, but I don’t think it has damaged the basic science.”
British Climate Secretary Ed Miliband said, “It’s right that there’s rigour applied to all the reports about climate change, but I think it would be wrong that when a mistake is made it’s somehow used to undermine the overwhelming picture that’s there.” But it’s not one mistake: it is a complete fabrication of every aspect of the Intergovernmental Panel on Climate Change (IPCC) Reports. In addition, science is only correct when accurate predictions are made, and the IPCC has been wrong in every single one. Miliband’s thinking helps explain why the UK is on the brink of economic disaster and needs a diversion. Such things are still said, despite the disclosures, because the objective of eliminating fossil fuels and destroying industrial economies is still being pursued.
What they don’t understand or choose to ignore is that the basic science was wrong from the start.
Climategate Corruption: Custom Made Science
The corruption disclosed was necessary because the science and the evidence didn’t fit what they wanted. They made the science fit the political goals and stopped at nothing to achieve that end. They succeeded because, beyond manipulations that duped politicians, media and most of the public, they knew that many scientists who participated did not understand climate science. Blinded by career ambitions and large funding, those scientists ignored what was going on or lacked the expertise to know. Now some scientists incorrectly claim the basic evidence is still valid.
Politicians and political leaders worldwide accepted and adopted these reports as their political Bible. Most of them still don’t understand what went on and therefore failed to react properly. Scientists in political positions support them in their chosen ignorance, manifest in inappropriate reactions. John Beddington, science advisor to the UK government and professor of applied population biology, demonstrates this lack of understanding of climate science. He says, “It’s unchallengeable that CO2 traps heat and warms the Earth and that burning fossil fuels shoves billions of tonnes of CO2 into the atmosphere. But where you can get challenges is on the speed of change.” There are serious questions about the role of CO2 as a greenhouse gas. For example, the IPCC claimed CO2 stays in the atmosphere for up to 100 years, but we know that this residency time is wrong: it is between five and six years. The duration was an essential part of the political game to increase pressure for action: even if we stopped CO2 production right now, the impact would be felt for decades. There’s also the troubling fact that in every record of any duration, for any time in the Earth’s history, temperature change precedes CO2 change. And we can have little idea of the greenhouse effect while we have no understanding of the role of water vapor. His argument about the speed of change is not an issue either; they made it one by claiming current change is faster than in the past. It isn’t.
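For what it’s worth, the five-to-six-year figure comes from a simple stock-over-flux calculation (round, illustrative numbers below); note that the two sides are partly computing different quantities, the residence time of an individual CO2 molecule versus the decay time of an excess concentration:

```latex
% Residence time of a CO2 molecule: atmospheric stock over gross exchange flux
\tau \;\approx\; \frac{M_{\mathrm{atm}}}{F_{\mathrm{gross}}}
     \;\approx\; \frac{800\ \mathrm{GtC}}{150\ \mathrm{GtC\,yr^{-1}}}
     \;\approx\; 5\ \mathrm{yr}
```

The IPCC’s much longer figure describes how long an added concentration perturbation takes to decay, which depends on net rather than gross exchange; part of the dispute is over which of these quantities is the policy-relevant one.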
Beddington’s comments show he doesn’t understand the scientific method. He said, “I don’t think it’s healthy to dismiss proper skepticism.” How does he distinguish between skepticisms? What is “proper”? All scientists are skeptics and all of their questions and inputs are healthy. The CRU gang and government agencies actively excluded skeptics and skepticism. The use of the term is dismissive and in a backhanded way acknowledges their behavior.
Complaints about the control, typified by CRU Director Phil Jones’s statement that he would keep certain papers out even if it meant changing the peer-review process, led to an external review process. It was a sham, a public relations exercise allowing them to say they were inclusive of skeptics, when they changed virtually nothing. I could never discover who chose the reviewers. Virtually none of the corrections, amendments and additions proposed by the external reviewers were included. One review editor, when pushed for an accounting, claimed he had eliminated the files.
Powerful Players Still Active
Recently a paper published in Science announced, “climate scientists have overlooked a major cause of global warming and cooling, a new study reveals today.” No, they haven’t; only the scientists involved in the IPCC have overlooked it. We then have a quote from Dr. Susan Solomon of the US National Oceanic and Atmospheric Administration (NOAA): “current climate models do a remarkable job on water vapor near the surface, but this is different - it’s a thin wedge of the upper atmosphere that packs a wallop from one decade to the next in a way we didn’t expect.”
No, the current climate models do not do a remarkable job. In fact, the entire issue of water vapor as a greenhouse gas is essentially ignored and badly handled. Dr. Solomon was a co-chair of Working Group 1, the scientific group of the IPCC. This means she bears responsibility for the content of its report. Solomon was in direct communication with the people at CRU, yet she failed to identify the serious problems now being disclosed. How does the certainty of the Reports, and especially the Summary for Policymakers, fit with this statement by Solomon? “We call this the 10/10/10 paper: 10 miles above your head, there is 10% less water vapor than there was 10 years ago. Why did the water vapor decrease? We really don’t know, we don’t have enough information yet.”
Despite this, Solomon must play down the limitations the findings imply, saying, “this isn’t an indication that predictions on global warming are overstated”. Yes, it is; and the cooling trend since 2002 while CO2 levels increased is another. “This doesn’t mean there isn’t global warming.” “There’s no significant debate that it is warmer now than it was 100 years ago, due to anthropogenic (man-made) greenhouse gases.” The only place where that statement is true is in the computer models of the IPCC, and we know they are useless. As CRU and IPCC member Trenberth said on October 12, 2009, “The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.” Solomon apparently found an answer by claiming the upper-level water vapor “very likely made substantial contributions to the flattening of the global warming trend since about 2000.” And that is probably what the research is all about.
Dr. Solomon was also involved in the claim that CFCs were destroying the ozone layer when there was never any evidence. In fact, UV interacting with oxygen creates ozone, but they incorrectly assumed UV was constant. We now know it varies by up to 200%.
Destructive Policies Are a Self-Perpetuating Behemoth
The hypothesis that CO2 was causing warming was accepted as fact before scientific testing began. They blocked most testing and challenges, but it never was the cause. As the evidence accumulated that this was the case, the corruption and falsification exposed by Climategate became necessary. Dismissal of Climategate now ignores the fact that the fundamental science was wrong. Climategate became necessary to achieve the political objective. Politicians and scientists who bought into the objectives don’t want to believe Climategate, or to abandon the benefits of appearing green, advancing careers, making money, or imposing taxes and political control on everybody to destroy western economies and democracy. Read more here.
The findings of the Intergovernmental Panel on Climate Change (IPCC) are often held up as representing “the consensus of scientists” - a pretty grandiose and presumptuous claim. And one that in recent days, weeks, and months, has been unraveling. So too, therefore, must all of the secondary assessments that are based on the IPCC findings - the most notable of which is the EPA’s Endangerment Finding - that “greenhouse gases taken in combination endanger both the public health and the public welfare of current and future generations.”
Recent events have shown, rather embarrassingly, that the IPCC is not “the” consensus of scientists, but rather the opinions of a few scientists (in some cases as few as one) in various subject areas, whose consensus among themselves is then kludged together by the designers of the IPCC final product, who a priori know what they want the ultimate outcome to be (that greenhouse gases are leading to dangerous climate change and need to be restricted). So clearly you can see why the EPA (which has a similar objective) would decide to rely on the IPCC findings rather than conduct an independent assessment of the science with the same predetermined outcome. Why go through the extra effort to arrive at the same conclusion?
The EPA’s official justification for its reliance on the IPCC’s findings is that it has reviewed the IPCC’s “procedures” and found them to be exemplary. Below is a look at some things, recently revealed, that the IPCC “procedures” have produced. These recent revelations indicate that the “procedures” are not infallible and that highly publicized IPCC results are either wrong or unjustified - which has the knock-on effect of rendering the IPCC an unreliable source of information. Unreliable doesn’t mean wrong in all cases, mind you, just that it is hard to know where and when errors are present, and as such, the justification that “the IPCC says so” is no longer sufficient (or acceptable).
Himalayan Glaciers
The IPCC has actually admitted that, in at least one area, its procedures have failed: “It has, however, recently come to our attention that a paragraph in the 938 page Working Group II contribution to the underlying assessment refers to poorly substantiated estimates of rate of recession and date for the disappearance of Himalayan glaciers. In drafting the paragraph in question, the clear and well-established standards of evidence, required by the IPCC procedures, were not applied properly. The Chair, Vice-Chairs, and Co-chairs of the IPCC regret the poor application of well-established IPCC procedures in this instance. This episode demonstrates that the quality of the assessment depends on absolute adherence to the IPCC standards, including thorough review of ‘the quality and validity of each source before incorporating results from the source into an IPCC Report’. We reaffirm our strong commitment to ensuring this level of performance.”
It turns out that in this case of the Himalayan glaciers (apparently a favored fund-raising topic of IPCC head Dr. Rajendra Pachauri for a non-profit that he heads), the IPCC’s findings - that the glaciers would largely disappear by the year 2035 (and endanger the water supply for hundreds of millions of people, as well as lead to increased avalanches and mudslides) as a result of anthropogenic global warming - were apparently based on some comments made by one researcher to the press. Those statements were later included in a World Wildlife Fund report that was the source cited by the IPCC.
Dr. Pachauri vehemently denied accusations of bad procedures when they were first made, going as far as calling evidence supporting the accusations “voodoo science”. Now, with the IPCC’s admission of its errors, an apology is being sought from Dr. Pachauri because of his remarks. The latest scuttlebutt on this issue is that several folks in the IPCC knew of these problems for some time but allowed them to perpetuate anyway, and that attempts by other IPCC scientists to correct them were lost in the mail. This doesn’t speak so highly for the “procedures.”
Attribution of Increasing Damages to Rising Temperatures
Another issue which has gotten the IPCC’s attention in recent days has been its attribution of increasing weather-related losses to rising temperatures from human activities. In this case, the IPCC decided to issue a statement in which it denied that its “procedures” went astray - despite clear and growing evidence to the contrary. Such a denial in the face of mounting evidence seems like it could do even more harm than actually admitting another goof (but then again, how many major goofs can the IPCC really admit to without having to scrap the whole thing?).
The problems contained in the IPCC assessment have been well-documented by a series of posts by Roger Pielke Jr. at his blog (here, here, here, and here). Pielke Jr. builds a pretty convincing case that the IPCC gave short shrift to the body of peer-reviewed literature that concluded that a clear linkage between the rising levels of weather-related damages and rising temperatures was not yet demonstrable. The rapidly growing effect of demographic changes (population, wealth, etc.) overwhelms the influence of the weather and confuses issues of attribution. Yet, somehow, the IPCC’s assessment established that increasing losses were linked to rising temperatures. Pielke Jr. traces this to a “single non-peer reviewed” study “cherrypicked” from a “workshop.” Dr. Pielke Jr. had this to say about the IPCC’s recent statement upholding its findings and its claims that “In writing, reviewing, and editing this section, IPCC procedures were carefully followed to produce the policy-relevant assessment that is the IPCC mandate”:
Carefully followed procedures? Let’s review: (a) The IPCC relied on an unpublished, non-peer reviewed source to produce its top line conclusions in this section, (b) when at least two reviewers complained about this section, the IPCC ignored their complaints and invented a response characterizing my views. (c) When the paper that this section relied on was eventually published it explicitly stated that it could not find a connection between rising temperatures and the costs of disasters. Pielke Jr. continued:
This press release from the IPCC would have been a fine opportunity to set the scientific and procedural record straight and admit to what are obvious and major errors in content and process. Instead, it has decided to defend the indefensible, which any observer can easily see through. Of course there is no recourse here as the IPCC is unaccountable and there is no formal way to address errors in its report or its errors and misdirection via press release. Not a good showing by the IPCC.
Basically, with regards to this issue, the IPCC “consensus of scientists” isn’t even the consensus of the leading scientists actively studying the topic, but reflects the wishful thinking of one or two chapter authors and, no doubt, the IPCC designers, as well.
Medieval Warm Period
Many more examples of the IPCC “procedures” can be found courtesy of the Climategate emails.
For instance, in Chapter 6, the paleoclimate chapter of the IPCC’s most recent Fourth Assessment Report (AR4), it was the strong sentiment of one of the chapter’s coordinating lead authors, Jonathan Overpeck, that the Medieval Warm Period (MWP) - a period of relatively high temperatures that occurred about a thousand years ago - should be dismissed. If the MWP were found to be as warm as recent conditions, then the possibility that natural processes may play a larger role in recent warming would be harder to ignore - thus the need to dismiss it. The task of doing so fell on Keith Briffa, who developed the contents of a special box in IPCC AR4 Chapter 6 that was apart from the main text and which focused on the MWP. Here’s the advice issued to Briffa by Overpeck:
“I get the sense that I’m not the only one who would like to deal a mortal blow to the misuse of supposed warm period terms and myths in the literature. The sceptics and uninformed love to cite these periods as natural analogs for current warming too - pure rubbish.
So, pls DO try hard to follow up on my advice provided in previous email. No need to go into details on any but the MWP, but good to mention the others in the same dismissive effort.”
Briffa attempted to complete his task by presenting a well-chosen collection of data that showed that while some proxy temperature reconstructions did show a warm period about 1,000 years ago, others did not. He concluded that a more complete picture indicated that the higher temperatures during the MWP were “heterogeneous” (regionalized), while the warming of the late 20th century has been “homogeneous” (i.e., much broader in spatial extent) - confirming that current conditions were likely unprecedented during the past 1,300 years. Briffa received congratulations for a job well done from Overpeck:
“[A]ttached is Keith’s MWP box w/ my edits. It reads just great - much like a big hammer. Nice job.”
Thus a conclusion driven by Overpeck’s desires, written by Briffa, and perhaps reviewed by the chapter’s other authors (with varying degrees of knowledge about the subject), is now preserved as “the consensus of scientists”.
But apparently that consensus isn’t accepted by other leading paleoclimate researchers. In a peer-reviewed article published in 2009 in the journal Climatic Change, paleo-researchers Jan Esper and David Frank carefully re-examined the same proxy temperature reconstructions used by Briffa and concluded that the IPCC was unwarranted in declaring that temperatures during the MWP were more heterogeneous than they are now. Here is the abstract from that paper:
In their 2007 report, IPCC working group 1 refers to an increased heterogeneity of climate during medieval times about 1000 years ago. This conclusion would be of relevance, as it implies a contrast in the spatial signature and forcing of current warmth to that during the Medieval Warm Period. Our analysis of the data displayed in the IPCC report, however, shows no indication of an increased spread between long-term proxy records. We emphasize the relevance of sample replication issues, and argue that an estimation of long-term spatial homogeneity changes is premature based on the smattering of data currently available.

See also the CO2 Science Medieval Warm Period Project findings, which show this was a very real, global warm period, supported by 801 individual scientists from 476 separate research institutions in 43 different countries here.
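To see what is actually being compared in this dispute, “spread between long-term proxy records” can be measured as the standard deviation across reconstructions within a time window, computed once for the medieval window and once for the late 20th century. The sketch below is only a minimal illustration of that calculation - the eight “proxy” series are invented placeholders, not published reconstructions, and this is not Esper and Frank’s exact procedure:

```python
# Minimal sketch of a heterogeneity comparison. The eight "proxy" series are
# invented placeholders; a real test would load published reconstructions.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(800, 2001)                    # AD 800-2000, annual steps
signal = 0.3 * np.sin((years - 800) / 200.0)    # toy shared climate signal
proxies = signal + 0.2 * rng.standard_normal((8, years.size))

def cross_record_spread(start, end):
    """Mean across-record standard deviation over the window [start, end]."""
    window = (years >= start) & (years <= end)
    return proxies[:, window].std(axis=0).mean()

mwp = cross_record_spread(950, 1100)            # Medieval Warm Period window
modern = cross_record_spread(1950, 2000)        # late 20th century window
print(f"MWP spread: {mwp:.3f}, modern spread: {modern:.3f}")
```

A markedly larger MWP value would support the “heterogeneous medieval, homogeneous modern” contrast; Esper and Frank’s point is that with so few independent records, no such increase is detectable in the data the IPCC displayed.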
Summary
So here we have multiple examples of the IPCC “procedures” and how “the consensus of scientists” is formed. In one case, the “consensus” was formed from comments made by a single scientist to the press; in another, the IPCC “consensus” conflicts with the consensus of scientists actually active in the topic of concern; and in the third case, the IPCC “consensus” is driven by the desires of one of the coordinating lead authors and is now disputed by other members of the field. Other examples seem to be coming to light daily (see here about conclusions regarding future agricultural productivity in Africa, or here about the IPCC pushing preconceived ideas).
In light of what we now know, I suggest that from now on, all IPCC products come with the following warning label: “The findings of the IPCC reports were developed in advance and furthered by a careful selection from whatever material could be found to support them. In some cases, supporting material was developed or fabricated where none could otherwise be located. As such, these findings may not necessarily reflect the true state of scientific understanding. Use at your own risk.” See post here.
See Pachauri confront his critics in this interview. Also see in this report how the Stern Review, a pillar of the global warming movement, was quietly changed after first publication when its claims could not be justified with scientific evidence. The Stern Review on the economics of climate change, which was commissioned by the Treasury, was greeted with headlines worldwide when it was published in October 2006. It contained dire predictions about the impact of climate change in different parts of the world.
There has been an ongoing attack on the credibility of federal climate-monitoring efforts, one partly inspired by Anthony Watts. In 2007, Mr. Watts, a former TV weather forecaster, began recruiting legions of volunteers across the United States to inspect the thousands of weather stations - some in people’s back yards or in parking lots - that have for generations produced the raw data feeding into federal and independent efforts to track climate trends. The result was SurfaceStations.org. Its rogues’ gallery of photos of particularly suspect weather stations has been credited by many meteorologists and climatologists as a wake-up call about the need for better monitoring (an issue that affects many other realms, from monitoring acid rain to tracking flood risks):
The revelations fueled charges that all that asphalt and the like was inflating temperature estimates and thus conclusions that the nation’s climate was warming over all. One result was Mr. Watts’ far more popular blog, Watts Up With That, which has arguably become the dominant pipeline for any news or commentary challenging conclusions that greenhouse gases could dangerously disrupt climate.
Now, though, a new study by Matthew Menne and other scientists at the National Climatic Data Center, the federal office charged with tracking climate trends, directly challenges the underpinnings of arguments that Bad Weather Stations = Faulty Climate Conclusions. In essence, the paper, On the Reliability of the U.S. Surface Temperature Record (pdf), concludes that the instrument issues, as long acknowledged, are real, but the poor stations tend to have a slight cool bias, not a warm one.
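The core of such a test is straightforward to state: split the stations into well-sited and poorly-sited groups and compare the temperature trends each group yields. Here is a minimal sketch of that comparison - the ratings and series below are synthetic placeholders, and this is not Menne et al.’s actual methodology, which works with the USHCN record and the surfacestations.org siting classifications:

```python
# Outline of a good-vs-poor siting comparison. All data are synthetic
# placeholders standing in for USHCN records and siting ratings.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1980, 2010)
ratings = rng.integers(1, 6, size=100)          # CRN-style siting classes 1-5
trend = 0.02                                    # deg C per year, shared signal
series = trend * (years - years[0]) + 0.3 * rng.standard_normal((100, years.size))

def mean_trend(mask):
    """Average least-squares trend (deg C/yr) across the selected stations."""
    return np.mean([np.polyfit(years, s, 1)[0] for s in series[mask]])

good, poor = ratings <= 2, ratings >= 3         # well sited vs poorly sited
print(f"well-sited trend:   {mean_trend(good):+.4f} C/yr")
print(f"poorly-sited trend: {mean_trend(poor):+.4f} C/yr")
print(f"difference:         {mean_trend(poor) - mean_trend(good):+.4f} C/yr")
```

A warm bias from poor siting would show up as a positive difference; Menne et al. report the opposite sign - a slight cool bias at the poor stations.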
Although the pending paper and its antecedents have been debated before, the firmness of the new findings came to my attention via Jeff Masters’ excellent Wunderblog. There is more useful analysis of the new paper on the Deltoid blog and at Skeptical Science.
The findings are robust enough that a frequent critic of climate overstatement, Chip Knappenberger, has provisionally endorsed the findings and thrown cold water on the idea that bad weather stations undercut the picture of a warming continent.
I’ve seen a lot of Anthony Watts’ presentations and pictures of poorly sited thermometers, but never an analysis to conclusively show that there is a warm bias in the adjusted U.S. temperature record as a result. Yes, many sites are poorly situated and the temperature they read is impacted by things other than the larger-scale weather - but also, such things are being corrected for (or at least an attempt is being made to correct for them) by the various producers of a U.S. temperature history (i.e. Menne et al. at NCDC). So, while the raw data are undoubtedly a mixture of climate and non-climatic influences, the adjusted data presumably have more of a climate signal. The recent paper by Menne et al., seems to bear this out. Anthony Watts and colleagues, no doubt have an analysis of their own in the works. It’ll be interesting to see what their results show. The results from Menne et al. suggest that while a picture may be worth a thousand words, it is the data which actually tells the story. I await a formal analysis from Watts et al. and the story that it may tell.
I consulted Mr. Watts and David Easterling at the National Climatic Data Center, who supervises the researchers who wrote the new paper, to get added perspective on it. Here’s what they had to say:
Dr. Easterling of the climate center said, among other things, that Mr. Watts had been invited to participate in writing the paper, given that it drew on his weather-station data. “We invited him a number of times to participate in the work,” he said. “He declined.” Dr. Easterling said that the new analysis shows that the adjustments that are made to account for shifting patterns of climate-data collection (the same adjustments are among the targets of those challenging global warming evidence) are robust.
“I don’t want to disparage Watts,” Dr. Easterling added. “He did do us a service by highlighting the fact there are a lot of issues with some of these stations. We are trying to address these issues.” He said that, going forward, the evolving Climate Reference Network will largely eliminate the need for such adjustments in any case.
Mr. Watts sent a long illustrated e-mail making some points, which I set up as a Google Document here.
In addition, he wrote the following long note, in which he contradicts Dr. Easterling’s description of the exchanges between the federal center and Mr. Watts over participation in the paper. In it, he says a rebuttal is in the works. Stay tuned.
From Anthony Watts:
The appearance of the Menne et al. paper was a bit of a surprise, since I had been offered collaboration by NCDC’s director in the fall. In a typed letter dated 9/22/09, Tom Karl wrote:
“We at NOAA/NCDC seek a way forward to cooperate with you, and are interested in joint scientific inquiry. When more or better information is available, we will reanalyze and compare and contrast the results.”
“If working together cooperatively is of interest to you, please let us know.”
I discussed it with Dr. Pielke and the rest of the team, which took some time since not all were available due to travel and other obligations. It was decided to take NCDC up on its offer of collaboration.
On November 10, 2009, I sent a reply letter via Federal Express to Mr. Karl, advising him of this. In that letter I also reiterated my concerns about use of the preliminary data (43% surveyed), and spelled out very specific reasons why.
We all waited, but there was no reply from NCDC to our acceptance of the offer of collaboration Mr. Karl had made in his last letter.
Then we discovered that Menne et al. had submitted a paper to JGR Atmospheres using my preliminary data, and that it was in press. This was a shock to me, since I was told it was normal procedure for the person who gathered the primary data a paper is based on to have some input in the review process. Dr. Pielke concurs, as you know.
NCDC uses data from one of the largest volunteer organizations in the world, the NOAA Cooperative Observer Network. Yet NCDC director Karl, by not bothering to reply to our letter about an offer he initiated, and by not giving me any opportunity to take part in the review process, has shown professional discourtesy to my own volunteers and my team’s work.
==============
I’ll point out that NCDC threw up a roadblock early on in the project, one that required a legal argument to overcome.
To interest volunteers, we had to publish imagery of what had been found so far. With no funding there was no other option; this is why social networking was employed. Within two weeks of the project’s announcement on Pielke’s blog, on or about 6/24/07, NCDC removed information that had previously been publicly available in its station metadatabase.
Dr. Pielke outlines the issue here.
Given the timing (five days after Mr. Karl’s e-mail below), it is now seen as an attempt to thwart the SurfaceStations project. Mr. Karl comments to colleagues in an e-mail about the project:
From: “Thomas.R.Karl”
To: Phil Jones
Subject: Re: FW: retraction request
Date: Tue, 19 Jun 2007 08:21:57 -0400
Cc: Wei-Chyung Wang
Thanks Phil,
We R now responding to a former TV weather forecaster who has got press, He has a Web site of 40 of the USHCN stations showing less than ideal exposure. He claims he can show urban biases and exposure biases. We are writing a response for our Public Affairs. Not sure how it will play out.
Regards, Tom
(From http://www.eastangliaemails.com)
Clearly Mr. Karl viewed the SurfaceStations project as a threat, as this e-mail coincides with the disappearance of portions of the public metadata. A week later, after we raised some legal arguments against NCDC, they relented and restored access, which I documented here.
=====================
As for the Menne paper itself, I’m rather disturbed by their use of preliminary data at 43 percent, especially since I warned them that the data set they had lifted from my Web site (placed there for volunteers to track what had been surveyed, never intended for analysis) had not been quality controlled at the time. Plus, there really aren’t enough good stations with enough spatial distribution at that sample size. They used it anyway and, amazingly, conducted their own secondary survey of those stations, comparing it to my non-quality-controlled data and implying that my data wasn’t up to par. Well, of course it wasn’t! I told them about it and why.

We had to resurvey and rerate a number of stations from early in the project. This came about only because it took many volunteers some time to learn how to properly identify them. Even some small towns have 2-3 COOP stations nearby, and only one of them is USHCN. There’s no flag in the NCDC database that says “USHCN”; in fact, many volunteers were not even aware of their own station’s status. Nobody ever bothered to tell them. You’d think that if their stations were part of a special subset, somebody at NOAA/NCDC would have notified the COOP volunteers so they could exercise a higher level of diligence.
If doing a station survey was important enough for NCDC to do now, to compare against my data for their paper, why didn’t they just do it in the first place?
We currently have 87 percent of the network surveyed (1067 stations out of 1221), and it is quality controlled and checked. I feel that we have enough of the better and urban stations to solve the “low-hanging fruit” problem of the earlier portion of the project. Data at 87 percent looks a lot different than data at 43 percent.
The paper I’m writing with Dr. Pielke and others will make use of this better data, and we also use a different procedure for analysis than what NCDC used.
=====================
Menne mentioned a “counterintuitive” cooling trend in some portions of the data. Interestingly enough, former California state climatologist James Goodridge did an independent analysis of COOP stations in California that had gone through modernization, switching from Stevenson screens with mercury liquid-in-glass (LIG) thermometers to MMTS electronic thermometers. He writes:
Hi Anthony, I found 58 temperature station in California with data for 1949 to 2008 and where the thermometers had been changed to MMTS and the earlier parts were liquid in glass. The average for the earlier part was 59.17F and the MMTS fraction averaged 60.07F.
Jim
A 0.9F (0.5C) warm offset due to modernization is significant, yet NCDC insists that the MMTS units test at about 0.05 cooler. I believe they add this adjustment into the final data. Our experience shows the exact opposite should be done, and with a greater magnitude. I can provide Jim’s contact info if anyone is interested.
When our paper is completed (and hopefully accepted in a journal), we’ll let science do the comparison on data and methods, and we’ll see how it works out. Could I be wrong? Quite possibly. But everything I’ve seen so far tells me I’m on the right track.
Anthony
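For readers who want to check the arithmetic in Goodridge’s note: the offset is simply the difference of the two era averages, and a temperature difference converts from Fahrenheit to Celsius by a factor of 5/9, with no 32-degree shift. A minimal check, using only the two averages quoted above:

```python
# Check of the offset arithmetic in Goodridge's note. The two averages are
# as quoted above; the conversion factor for temperature differences is 5/9.
lig_avg_f = 59.17                   # liquid-in-glass era average (deg F)
mmts_avg_f = 60.07                  # MMTS era average (deg F)

offset_f = mmts_avg_f - lig_avg_f   # 0.90 deg F
offset_c = offset_f * 5.0 / 9.0     # differences scale by 5/9; no +32 shift

print(f"offset: {offset_f:.2f} F = {offset_c:.2f} C")   # 0.90 F = 0.50 C
```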
The ball is clearly in Mr. Watts’s court. The peer-reviewed literature awaits.